Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections

Authors

Abstract

Random projections are able to perform dimension reduction efficiently for datasets with nonlinear low-dimensional structures. One well-known example is that random matrices embed sparse vectors into a low-dimensional subspace nearly isometrically, known as the restricted isometry property in compressed sensing. In this paper, we explore some applications of random projections in deep neural networks. We provide bounds on the expressive power of fully connected networks when the input data are sparse vectors or form a low-dimensional smooth manifold. We prove that the number of neurons required for approximating a Lipschitz function to a prescribed precision depends on the sparsity or the dimension of the manifold, and only weakly on the dimension of the input vector. The key to our proof is that random projections embed a set of sparse vectors or a low-dimensional smooth manifold stably into a low-dimensional subspace. Based on this fact, we also propose some new neural network models, where at each layer the input is first projected onto a low-dimensional subspace by a random projection, and then the standard linear connection and non-linear activation are applied. In this way, the number of parameters is significantly reduced, and therefore training can be accelerated without too much performance loss.
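The layer construction described in the abstract is concrete enough to sketch. Below is a minimal, hypothetical PyTorch version: a fixed (non-trainable) random Gaussian projection onto a low-dimensional subspace, followed by a trainable linear map and a nonlinear activation. The class name RandomProjectionLayer, the ReLU choice, and all dimensions are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class RandomProjectionLayer(nn.Module):
    """Sketch of one layer of the proposed model: random projection,
    then a standard linear connection and non-linear activation."""
    def __init__(self, in_dim, proj_dim, out_dim):
        super().__init__()
        # Fixed random Gaussian projection; stored as a buffer, so it is never trained.
        self.register_buffer("proj", torch.randn(proj_dim, in_dim) / proj_dim ** 0.5)
        # Trainable weights now act on proj_dim inputs rather than in_dim,
        # so the parameter count drops from in_dim*out_dim to proj_dim*out_dim.
        self.linear = nn.Linear(proj_dim, out_dim)

    def forward(self, x):
        return torch.relu(self.linear(x @ self.proj.T))

# Usage: compress 1024-dimensional inputs onto a 64-dimensional subspace.
layer = RandomProjectionLayer(1024, 64, 256)
y = layer(torch.randn(8, 1024))  # shape (8, 256)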


Similar Articles

Expressive power of recurrent neural networks

Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks – namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition – has been proven to have exponentially hi...


Fast Voltage and Power Flow Contingency Ranking Using Enhanced Radial Basis Function Neural Network

Deregulation of power systems in recent years has made static security assessment a major concern, for which a fast and accurate evaluation methodology is needed. Contingencies related to voltage violations and power-line overloading have been responsible for power system collapses. This paper presents an enhanced radial basis function neural network (RBFNN) approach for on-line ranking of ...


Fast Neural Networks with Circulant Projections

The basic computation of a fully-connected neural network layer is a linear projection of the input signal followed by a non-linear transformation. The linear projection step consumes the bulk of the processing time and memory footprint. In this work, we propose to replace the conventional linear projection with the circulant projection. The circulant structure enables the use of the Fast Fouri...
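Because circulant matrices are diagonalized by the discrete Fourier transform, the circulant projection can be applied in O(d log d) time via FFTs instead of the O(d^2) of a dense matrix, while storing only d parameters instead of d^2. A minimal PyTorch sketch, checked against the explicit dense matrix (the helper name circulant_matvec is an illustrative assumption, not the paper's API):

import torch

def circulant_matvec(c, x):
    # Multiply x by the circulant matrix whose first column is c, using
    # the identity C @ x = ifft(fft(c) * fft(x)) (circular convolution).
    return torch.fft.ifft(torch.fft.fft(c) * torch.fft.fft(x)).real

# Sanity check on a small example: build the dense circulant matrix whose
# j-th column is c rolled down by j, and compare the two products.
d = 8
c, x = torch.randn(d), torch.randn(d)
C = torch.stack([torch.roll(c, j) for j in range(d)], dim=1)
assert torch.allclose(circulant_matvec(c, x), C @ x, atol=1e-5)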


On the Expressive Power of Deep Neural Networks

We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps alon...
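To make the notion concrete, the following hypothetical sketch estimates a trajectory length: sweep the input along a circle, push each sample through a randomly initialized deep network, and sum the distances between consecutive outputs. The architecture, depth, and sampling density are illustrative assumptions, not the authors' experimental setup.

import torch
import torch.nn as nn

def trajectory_length(net, inputs):
    # Approximate the output trajectory's length by summing the distances
    # between the images of consecutive input samples.
    with torch.no_grad():
        out = net(inputs)
    return (out[1:] - out[:-1]).norm(dim=1).sum().item()

# Input trajectory: 512 samples along the unit circle in a 2-D input space.
t = torch.linspace(0, 2 * torch.pi, 512)
circle = torch.stack([torch.cos(t), torch.sin(t)], dim=1)

# A randomly initialized deep tanh network; the measure tracks how the
# output trajectory's length grows with depth.
net = nn.Sequential(*[m for _ in range(6) for m in (nn.Linear(2, 2), nn.Tanh())])
print(trajectory_length(net, circle))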


Training Neural Networks by Using Power Linear Units (PoLUs)

In this paper, we introduce the "Power Linear Unit" (PoLU), which increases the nonlinearity capacity of a neural network and thus helps improve its performance. PoLU adopts several advantages of previously proposed activation functions. First, the output of PoLU for positive inputs is designed to be the identity, to avoid the vanishing-gradient problem. Second, PoLU has a non-zero output for negative ...
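The abstract is cut off before PoLU is written down, so the sketch below only encodes the two properties it states: identity on positive inputs and a non-zero, saturating response on negative inputs. The negative branch (1 - x)^(-n) - 1 is an assumed form chosen to satisfy those properties, not necessarily the paper's exact definition.

import torch
import torch.nn as nn

class PoLU(nn.Module):
    # Sketch of a Power Linear Unit: identity for x >= 0 (no vanishing
    # gradient there) and a non-zero output in (-1, 0) for x < 0. The
    # negative branch is an assumption; the source abstract is truncated.
    def __init__(self, n=1.0):
        super().__init__()
        self.n = n

    def forward(self, x):
        # Clamp before the power so the base stays positive on both branches.
        neg = (1 - torch.clamp(x, max=0.0)) ** (-self.n) - 1
        return torch.where(x >= 0, x, neg)

act = PoLU(n=2.0)
print(act(torch.tensor([-2.0, -0.5, 0.0, 1.5])))  # ~[-0.889, -0.556, 0.0, 1.5]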



Journal

Journal title: CSIAM Transactions on Applied Mathematics

Year: 2021

ISSN: 2708-0560, 2708-0579

DOI: https://doi.org/10.4208/csiam-am.so-2020-0004